The socio-technical making of ML fairness

Status: Ongoing

Collaborators: Dr. Misita Anwar, A/Prof. Joanne Evans

Machine learning (ML) systems are now embedded in many social institutions, where they can reproduce or intensify systemic inequalities that are durable, institutional, and historically entrenched. The field of fair ML emerged to align ML systems with conceptions of fairness, but much of its early work prioritised narrowly technical interventions focused on allocative harms (e.g., the distribution of opportunities, resources, or burdens across social groups). This emphasis has often abstracted away from the broader social and institutional conditions in which discrimination is produced and sustained, and has paid insufficient attention to representational harms (i.e., the reproduction of social hierarchies) that arise across the socio-technical ML pipeline.
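To make the contrast concrete, the "narrowly technical interventions" mentioned above typically operationalise fairness as a statistic over model outputs. A minimal, purely illustrative sketch (not part of this project, and with hypothetical loan-decision data) is the demographic parity difference, which compares positive-decision rates across two social groups:

```python
# Illustrative sketch only: demographic parity difference, a common
# allocative fairness metric. It reduces fairness to a gap in positive
# decision rates between groups, abstracting away the institutional
# context in which those decisions are made.

def selection_rate(decisions, groups, group):
    """Fraction of members of `group` receiving a positive (1) decision."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(decisions, groups):
    """Absolute gap in selection rates between the two groups present."""
    a, b = sorted(set(groups))
    return abs(selection_rate(decisions, groups, a)
               - selection_rate(decisions, groups, b))

# Hypothetical loan decisions (1 = approved) for two groups "A" and "B".
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # prints 0.5
```

Such a metric captures one allocative snapshot but says nothing about how the decision problem was framed, measured, or embedded institutionally, which is precisely the gap a socio-technical approach addresses.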

Responding to the field’s normative turn and growing calls to move beyond technical fixes, our project advances a socio-technical approach to fairness. We begin by mapping how notions of fairness are made in practice, that is, how they are sculpted, narrowed, and stabilised across the socio-technical ML pipeline through the decisions and assumptions embedded at each stage. We trace this process from how an ML system is positioned within its societal and institutional context (e.g., framed as an intervention against inequality versus functioning as a perpetuator of it), through problem and policy formulation, measurement choices, technical model development, and decision-making contexts, to the feedback loops through which system outputs reshape data, behaviours, institutions, and society. This mapping renders visible the socio-technical processes through which fairness is constructed and operationalised. It provides a foundation for critique and contestation, while also specifying where unfairness emerges, how responsibility should be distributed across actors and institutions, and where interventions are practically feasible.

Building on this diagnostic mapping, we then work with lawyers, policymakers, digital rights advocates, and academics to co-design a socio-technical process for Fair ML that goes beyond narrow technical fixes. The aim is to contribute not only to distributional equality but also to morally equal social relations. Finally, we examine how these fairness conceptions align (or fail to align) with contemporary data protection and AI regulation, translating our findings into actionable recommendations for emerging ML legislation and institutional practice.

This project is part of my work as a postdoctoral fellow at the School of Business, Law, and Entrepreneurship at Swinburne University.
